A Brain tumor is considered as one of the aggressive diseases, among children and adults. Application of automated classification techniques using Machine Learning(ML) and Artificial Intelligence(AI)has consistently shown higher accuracy than manual classification. Hence, proposing a system performing detection and classification by using Deep Learning Algorithms using ConvolutionNeural Network (CNN), Artificial Neural Network (ANN), and TransferLearning (TL) would be helpful to doctors all around the world.
The gold here is to identify tumor type among 'glioma_tumor','no_tumor','meningioma_tumor','pituitary_tumor'.
To Detect and Classify Brain Tumor using, CNN and TL; as an asset of Deep Learning and to examine the tumor position(segmentation).
To start, Tensorflow CNN-based Brain Tumor Detection will be used. I'll investigate both augmented and unaugmented models, providing insights into effective tumor detection.
Transfer Learning and use pre-trained models like EfficientNetB0, ResNet101, and Xception will be investigated also.
import numpy as np
import os
import keras
import pandas as pd
import plotly.graph_objects as go
import plotly.subplots as sp
import plotly.express as px
import matplotlib.colors
import seaborn as sns
import matplotlib.pyplot as plt
from skimage.transform import resize
from sklearn.utils import shuffle
from tensorflow.keras.utils import to_categorical
### Creating the CNN Model
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.layers import Input, Dense, InputLayer, Flatten, Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization
from keras.models import Model, Sequential
from tensorflow.keras.models import Sequential, load_model
from keras import metrics
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
# Building Model
from keras.utils import plot_model
from tensorflow.keras import models
# Training Model
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
C:\Users\yanch\anaconda3\Lib\site-packages\paramiko\transport.py:219: CryptographyDeprecationWarning: Blowfish has been deprecated "class": algorithms.Blowfish,
colors_dark = ["#1F1F1F", "#313131", '#636363', '#AEAEAE', '#DADADA']
colors_red = ["#331313", "#582626", '#9E1717', '#D35151', '#E9B4B4']
colors_green = ['#01411C','#4B6F44','#4F7942','#74C365','#D0F0C0']
base_dir = 'C:\\Users\\yanch\\Desktop\\UC\\Classes\\2024 Spring\\ADSP 31009 Machine Learning and Predictive Analytics\\Final Project'
train_dir = os.path.join(base_dir, 'Training')
test_dir = os.path.join(base_dir, 'Testing')
labels = ['glioma_tumor', 'meningioma_tumor', 'no_tumor', 'pituitary_tumor']
from skimage.transform import resize
X_train = [] #Training Dataset
Y_train = [] #Training Labels
image_size=224
for label in labels:
path = os.path.join(train_dir, label)
class_num = labels.index(label)
for img in os.listdir(path):
img_array = plt.imread(os.path.join(path, img))
img_resized = resize(img_array, (image_size, image_size, 3))
X_train.append(img_resized)
Y_train.append(class_num)
for label in labels:
path = os.path.join(test_dir, label)
class_num = labels.index(label)
for img in os.listdir(path):
img_array = plt.imread(os.path.join(path, img))
img_resized = resize(img_array, (image_size, image_size, 3))
X_train.append(img_resized)
Y_train.append(class_num)
X_train = np.array(X_train)
Y_train = np.array(Y_train)
# Data generators
train_datagen = ImageDataGenerator(rescale=1/255,
rotation_range=90,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
vertical_flip=True,
validation_split=0.2)
valid_datagen = ImageDataGenerator(rescale=1/255, validation_split=0.2)
train_generator=train_datagen.flow_from_directory(train_dir,
target_size=(224,224), color_mode='rgb', shuffle=True,
subset='training', batch_size=32, class_mode='categorical')
val_generator = valid_datagen.flow_from_directory(train_dir,
target_size=(224,224), color_mode='rgb', shuffle=True,
subset='validation',batch_size=32,class_mode='categorical')
Found 2297 images belonging to 4 classes. Found 573 images belonging to 4 classes.
X_train.shape
(3264, 224, 224, 3)
# Shuffling data
X_train, Y_train = shuffle(X_train, Y_train, random_state=42)
#After shuffling sample size remains same
X_train.shape
(3264, 224, 224, 3)
# This method uses the classes array, which directly indicates the class index for each image
(unique, counts) = np.unique(train_generator.classes, return_counts=True)
class_counts = dict(zip(unique, counts))
# Mapping index to class names
class_names = {v: k for k, v in train_generator.class_indices.items()}
class_counts_named = {class_names[k]: v for k, v in class_counts.items()}
# Plotting
plt.figure(figsize=(10, 5))
plt.bar(class_counts_named.keys(), class_counts_named.values())
plt.title('Distribution of Classes in Training Data')
plt.xlabel('Class')
plt.ylabel('Number of Images')
plt.xticks(rotation=45)
plt.show()
Analyzing the distribution of pixel intensities can help in understanding the general characteristics of the images, like contrast and brightness, and might suggest necessary preprocessing steps like histogram equalization.
fig, ax = plt.subplots()
for img in X_train[:5]: # Use the same images from the first batch
sns.histplot(img.ravel(), label='Pixel Intensity', ax=ax, kde=True)
ax.set_title('Pixel Intensity Distribution')
ax.legend()
plt.show()
#plotting the images
plt.figure(figsize=(20,20))
for i in range(16):
plt.subplot(4,4,i+1)
plt.imshow(X_train[i])
plt.title(labels[Y_train[i]], fontsize=16, fontweight='bold')
plt.axis("off")
plt.show()
# Split the data into training and testing and validation
X_train, X_test, Y_train, Y_test = train_test_split(X_train, Y_train, test_size=0.2, random_state=42)
X_train, X_valid, Y_train, Y_valid = train_test_split(X_train, Y_train, test_size=0.1, random_state=42)
print(X_train.shape)
print(X_valid.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
print(Y_valid.shape)
(2349, 224, 224, 3) (262, 224, 224, 3) (653, 224, 224, 3) (2349,) (653,) (262,)
import warnings
# Count the number of images in each class
class_counts = np.bincount(Y_train)
class_names = ['glioma', 'meningioma', 'no tumor', 'pituitary']
# Create a DataFrame with class names and counts
train_df = pd.DataFrame({'Class': class_names, 'Count': class_counts})
# Create a histogram for the train labels using Plotly Express
fig = px.bar(train_df, y='Class', x='Count', color='Class', template='plotly_dark',
title='\nNumber of Images in Each Class of the Train Data', orientation='h')
# Update hover template to display count and label
fig.update_traces(hovertemplate='Count: %{x}<br>Class: %{y}')
# Update layout with custom styles
fig.update_layout(title_font=dict(color='white'),
legend=dict(font=dict(color='white')),
showlegend=False) # Hide legend
# Set the template style to 'plotly_dark'
fig.update_layout(template='plotly_dark')
# Show the plot
fig.show()
# Count the number of images in each class
class_counts = np.bincount(Y_test)
class_names = ['glioma', 'meningioma', 'no tumor', 'pituitary']
# Create a DataFrame with class names and counts
train_df = pd.DataFrame({'Class': class_names, 'Count': class_counts})
# Create a histogram for the train labels using Plotly Express
fig = px.bar(train_df, y='Class', x='Count', color='Class', template='plotly_dark',
title='\nNumber of Images in Each Class of the Test Data', orientation='h')
# Update hover template to display count and label
fig.update_traces(hovertemplate='Count: %{x}<br>Class: %{y}')
# Update layout with custom styles
fig.update_layout(title_font=dict(color='white'),
legend=dict(font=dict(color='white')),
showlegend=False) # Hide legend
# Set the template style to 'plotly_dark'
fig.update_layout(template='plotly_dark')
# Show the plot
fig.show()
# convert string to categorical
from keras.utils import to_categorical
y_train_new = []
y_valid_new = []
y_test_new = []
for i in range(len(Y_train)):
y_train_new.append(to_categorical(Y_train[i], num_classes=4))
for i in range(len(Y_valid)):
y_valid_new.append(to_categorical(Y_valid[i], num_classes=4))
for i in range(len(Y_test)):
y_test_new.append(to_categorical(Y_test[i], num_classes=4))
y_train_new = np.array(y_train_new)
y_valid_new = np.array(y_valid_new)
y_test_new = np.array(y_test_new)
It initializes the model to accept input images of size (image_size, image_size, 3), which corresponds to image height, image width, and 3 color channels (RGB).
Conv2D with 16 filters: Applies a 5x5 convolution kernel to extract features such as edges and textures. The use of 16 filters means it will output 16 different feature maps.
BatchNormalization: Normalizes the activations from the previous layer, which helps in accelerating the training process and stabilizing the learning by normalizing the input layer by re-centering and re-scaling.
MaxPooling2D: Reduces the spatial dimensions (height and width) of the input volume to the next layer by taking the maximum value over a 2x2 pooling window. This helps in reducing the computational cost and overfitting by providing an abstracted form of the representation.
Dropout (0.2): Randomly sets the outgoing edges of 20% of the neurons to zero during training, to prevent overfitting.
These blocks increase in the number of filters (32, 64, 128, 256). Increasing the number of filters allows the network to capture more complex patterns like textures and shapes.
Each block follows a similar structure: a convolution layer, batch normalization, max pooling, and dropout. This repeated structure helps the network in learning hierarchically more complex features at each level.
Kernel sizes are generally smaller (3x3) in subsequent layers, which is common as deeper layers capture higher-level abstract features where finer granularity is less important.
The output of the final convolutional layer is flattened (converted from a matrix to a vector), so it can be fed into the dense layers.
Dense Layer with 512 neurons: This layer is fully connected and uses ReLU activation. It serves as a classifier on the features formed by the convolutions and pooling layers. Dropout (0.2): Again used here to reduce overfitting.
Dense layer with 4 neurons: This implies the model is intended for a classification task with 4 classes. The softmax activation function is used to output a probability distribution over the 4 classes.
The model uses the Adam optimizer, a popular choice for deep learning tasks as it combines the best properties of the AdaGrad and RMSProp algorithms to optimize its weights.
The loss function is categorical_crossentropy, suitable for multi-class classification problems.
The metric used to evaluate the model is accuracy.
#simple CNN per with augment
model = Sequential()
model.add(InputLayer(input_shape=(image_size, image_size,3)))
model.add(Conv2D(16, kernel_size=(5, 5), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
C:\Users\yanch\anaconda3\Lib\site-packages\keras\src\layers\core\input_layer.py:25: UserWarning: Argument `input_shape` is deprecated. Use `shape` instead.
Model: "sequential_2"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ conv2d_10 (Conv2D) │ (None, 220, 220, 16) │ 1,216 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ batch_normalization_4 │ (None, 220, 220, 16) │ 64 │ │ (BatchNormalization) │ │ │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_10 (MaxPooling2D) │ (None, 110, 110, 16) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_10 (Dropout) │ (None, 110, 110, 16) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_11 (Conv2D) │ (None, 108, 108, 32) │ 4,640 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ batch_normalization_5 │ (None, 108, 108, 32) │ 128 │ │ (BatchNormalization) │ │ │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_11 (MaxPooling2D) │ (None, 54, 54, 32) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_12 (Conv2D) │ (None, 52, 52, 64) │ 18,496 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_12 (MaxPooling2D) │ (None, 26, 26, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_11 (Dropout) │ (None, 26, 26, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_13 (Conv2D) │ (None, 24, 24, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_13 (MaxPooling2D) │ (None, 12, 12, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_12 (Dropout) │ (None, 12, 12, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_14 (Conv2D) │ (None, 10, 10, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_14 (MaxPooling2D) │ (None, 5, 5, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_13 (Dropout) │ (None, 5, 5, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_2 (Flatten) │ (None, 6400) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_4 (Dense) │ (None, 512) │ 3,277,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_14 (Dropout) │ (None, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_5 (Dense) │ (None, 4) │ 2,052 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 3,672,932 (14.01 MB)
Trainable params: 3,672,836 (14.01 MB)
Non-trainable params: 96 (384.00 B)
The disparity between training and validation/test accuracy along with the low test scores suggests the model may be underfitting, as it does not perform well on any of the datasets.
history = model.fit(X_train, y_train_new,
batch_size=64,
epochs=10,
steps_per_epoch=5,
validation_data=(X_valid, y_valid_new))
Epoch 1/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9951 - loss: 0.0164 - val_accuracy: 0.9389 - val_loss: 0.5294 Epoch 2/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9983 - loss: 0.0069 - val_accuracy: 0.9313 - val_loss: 0.5413 Epoch 3/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9803 - loss: 0.0544 - val_accuracy: 0.9351 - val_loss: 0.5765 Epoch 4/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9889 - loss: 0.0235 - val_accuracy: 0.9389 - val_loss: 0.6128 Epoch 5/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9902 - loss: 0.0434 - val_accuracy: 0.9389 - val_loss: 0.6036 Epoch 6/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9746 - loss: 0.1208 - val_accuracy: 0.9351 - val_loss: 0.4841 Epoch 7/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9867 - loss: 0.0426 - val_accuracy: 0.9275 - val_loss: 0.5177 Epoch 8/10 2/5 ━━━━━━━━━━━━━━━━━━━━ 2s 700ms/step - accuracy: 0.9752 - loss: 0.0742
C:\Users\yanch\anaconda3\Lib\contextlib.py:155: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
5/5 ━━━━━━━━━━━━━━━━━━━━ 3s 405ms/step - accuracy: 0.9791 - loss: 0.0702 - val_accuracy: 0.9237 - val_loss: 0.4940 Epoch 9/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9889 - loss: 0.0358 - val_accuracy: 0.9084 - val_loss: 0.5404 Epoch 10/10 5/5 ━━━━━━━━━━━━━━━━━━━━ 6s 1s/step - accuracy: 0.9845 - loss: 0.0548 - val_accuracy: 0.9198 - val_loss: 0.5243
# Save the model
# this is baseline model with rotation range = 20
model.save('new_cnn_model_1.keras')
import matplotlib.pyplot as plt
# Plotting the training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('training_validation_loss.png')
plt.show()
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 124ms/step Val Accuracy = 0.2672
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 3s 132ms/step Test Accuracy = 0.2450
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.00 0.00 0.00 219
1 0.20 0.01 0.01 187
2 0.17 0.80 0.29 87
3 0.36 0.56 0.44 160
accuracy 0.25 653
macro avg 0.18 0.34 0.18 653
weighted avg 0.17 0.25 0.15 653
C:\Users\yanch\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1469: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. C:\Users\yanch\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1469: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. C:\Users\yanch\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1469: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
Model Performance: The model demonstrates excellent learning capability and generalizes well to unseen data. The balance between training and validation performance suggests that the model configurations, including architecture adjustments, regularization techniques, or hyperparameters, are well-tuned.
Stability and Overfitting: The relatively smooth and convergent training and validation loss curves indicate that the model is stable and not overfitting. This is corroborated by the close tracking of validation loss to training loss.
The training and validation loss curves show a desirable behavior. Training loss steadily decreases, indicating good learning progress. Validation loss decreases alongside and remains close to the training loss, which is a good sign of the model not overfitting.
history = model.fit(X_train, y_train_new,
batch_size=64,
epochs=50,
steps_per_epoch=50,
validation_data=(X_valid, y_valid_new))
Epoch 1/50 37/50 ━━━━━━━━━━━━━━━━━━━━ 12s 981ms/step - accuracy: 0.5946 - loss: 0.9527
C:\Users\yanch\anaconda3\Lib\contextlib.py:155: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 750ms/step - accuracy: 0.6032 - loss: 0.9389 - val_accuracy: 0.4008 - val_loss: 1.2256 Epoch 2/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 738ms/step - accuracy: 0.7068 - loss: 0.7582 - val_accuracy: 0.6145 - val_loss: 1.0696 Epoch 3/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 730ms/step - accuracy: 0.7281 - loss: 0.6811 - val_accuracy: 0.5496 - val_loss: 1.0466 Epoch 4/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 731ms/step - accuracy: 0.7868 - loss: 0.5578 - val_accuracy: 0.6374 - val_loss: 0.9908 Epoch 5/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 743ms/step - accuracy: 0.8076 - loss: 0.4926 - val_accuracy: 0.6718 - val_loss: 0.8159 Epoch 6/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 738ms/step - accuracy: 0.8473 - loss: 0.4301 - val_accuracy: 0.6908 - val_loss: 0.8416 Epoch 7/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 732ms/step - accuracy: 0.8699 - loss: 0.3598 - val_accuracy: 0.7443 - val_loss: 0.6991 Epoch 8/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 487s 10s/step - accuracy: 0.8710 - loss: 0.3516 - val_accuracy: 0.7595 - val_loss: 0.6429 Epoch 9/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 724ms/step - accuracy: 0.8879 - loss: 0.2956 - val_accuracy: 0.8130 - val_loss: 0.5012 Epoch 10/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 726ms/step - accuracy: 0.9031 - loss: 0.2607 - val_accuracy: 0.8588 - val_loss: 0.4286 Epoch 11/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 746ms/step - accuracy: 0.9337 - loss: 0.2054 - val_accuracy: 0.8702 - val_loss: 0.4034 Epoch 12/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 735ms/step - accuracy: 0.9091 - loss: 0.2255 - val_accuracy: 0.9008 - val_loss: 0.3470 Epoch 13/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 780ms/step - accuracy: 0.9400 - loss: 0.1757 - val_accuracy: 0.8817 - val_loss: 0.4146 Epoch 14/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 753ms/step - accuracy: 0.9507 - loss: 0.1474 - val_accuracy: 0.8817 - val_loss: 0.4246 Epoch 15/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 754ms/step - accuracy: 0.9528 - loss: 0.1394 - val_accuracy: 0.9084 - val_loss: 0.3712 Epoch 16/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 747ms/step - accuracy: 0.9588 - loss: 0.1137 - val_accuracy: 0.9122 - val_loss: 0.3470 Epoch 17/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 732ms/step - accuracy: 0.9605 - loss: 0.1113 - val_accuracy: 0.9237 - val_loss: 0.3751 Epoch 18/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 734ms/step - accuracy: 0.9551 - loss: 0.1080 - val_accuracy: 0.8931 - val_loss: 0.3842 Epoch 19/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 748ms/step - accuracy: 0.9650 - loss: 0.0972 - val_accuracy: 0.9160 - val_loss: 0.3153 Epoch 20/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 37s 735ms/step - accuracy: 0.9733 - loss: 0.0816 - val_accuracy: 0.9237 - val_loss: 0.4023 Epoch 21/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 764ms/step - accuracy: 0.9799 - loss: 0.0745 - val_accuracy: 0.9313 - val_loss: 0.3908 Epoch 22/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 41s 817ms/step - accuracy: 0.9625 - loss: 0.0931 - val_accuracy: 0.9275 - val_loss: 0.3757 Epoch 23/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 41s 806ms/step - accuracy: 0.9834 - loss: 0.0574 - val_accuracy: 0.9313 - val_loss: 0.3387 Epoch 24/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 40s 793ms/step - accuracy: 0.9730 - loss: 0.0750 - val_accuracy: 0.9313 - val_loss: 0.2972 Epoch 25/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 40s 798ms/step - accuracy: 0.9832 - loss: 0.0579 - val_accuracy: 0.9160 - val_loss: 0.4324 Epoch 26/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 40s 764ms/step - accuracy: 0.9682 - loss: 0.1009 - val_accuracy: 0.9237 - val_loss: 0.4243 Epoch 27/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 756ms/step - accuracy: 0.9774 - loss: 0.0579 - val_accuracy: 0.9351 - val_loss: 0.3809 Epoch 28/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 768ms/step - accuracy: 0.9821 - loss: 0.0566 - val_accuracy: 0.9427 - val_loss: 0.3873 Epoch 29/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 759ms/step - accuracy: 0.9894 - loss: 0.0359 - val_accuracy: 0.9427 - val_loss: 0.4556 Epoch 30/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 757ms/step - accuracy: 0.9863 - loss: 0.0442 - val_accuracy: 0.9160 - val_loss: 0.4866 Epoch 31/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 760ms/step - accuracy: 0.9858 - loss: 0.0473 - val_accuracy: 0.9313 - val_loss: 0.4852 Epoch 32/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 783ms/step - accuracy: 0.9872 - loss: 0.0445 - val_accuracy: 0.9389 - val_loss: 0.4071 Epoch 33/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 41s 810ms/step - accuracy: 0.9916 - loss: 0.0268 - val_accuracy: 0.9198 - val_loss: 0.5157 Epoch 34/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 761ms/step - accuracy: 0.9872 - loss: 0.0383 - val_accuracy: 0.9275 - val_loss: 0.4700 Epoch 35/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 765ms/step - accuracy: 0.9849 - loss: 0.0430 - val_accuracy: 0.9160 - val_loss: 0.4828 Epoch 36/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 755ms/step - accuracy: 0.9837 - loss: 0.0496 - val_accuracy: 0.9275 - val_loss: 0.5223 Epoch 37/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 756ms/step - accuracy: 0.9831 - loss: 0.0462 - val_accuracy: 0.9389 - val_loss: 0.4216 Epoch 38/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 757ms/step - accuracy: 0.9848 - loss: 0.0453 - val_accuracy: 0.9427 - val_loss: 0.4807 Epoch 39/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 749ms/step - accuracy: 0.9832 - loss: 0.0521 - val_accuracy: 0.9198 - val_loss: 0.6065 Epoch 40/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 771ms/step - accuracy: 0.9840 - loss: 0.0441 - val_accuracy: 0.9389 - val_loss: 0.5364 Epoch 41/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 42s 829ms/step - accuracy: 0.9882 - loss: 0.0404 - val_accuracy: 0.9313 - val_loss: 0.4418 Epoch 42/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 777ms/step - accuracy: 0.9881 - loss: 0.0338 - val_accuracy: 0.9427 - val_loss: 0.4785 Epoch 43/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 760ms/step - accuracy: 0.9925 - loss: 0.0230 - val_accuracy: 0.9580 - val_loss: 0.4233 Epoch 44/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 780ms/step - accuracy: 0.9899 - loss: 0.0393 - val_accuracy: 0.9427 - val_loss: 0.4645 Epoch 45/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 764ms/step - accuracy: 0.9915 - loss: 0.0212 - val_accuracy: 0.9389 - val_loss: 0.5971 Epoch 46/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 761ms/step - accuracy: 0.9898 - loss: 0.0299 - val_accuracy: 0.9504 - val_loss: 0.5630 Epoch 47/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 39s 778ms/step - accuracy: 0.9895 - loss: 0.0299 - val_accuracy: 0.9427 - val_loss: 0.4841 Epoch 48/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 759ms/step - accuracy: 0.9892 - loss: 0.0384 - val_accuracy: 0.9466 - val_loss: 0.4732 Epoch 49/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 40s 786ms/step - accuracy: 0.9900 - loss: 0.0299 - val_accuracy: 0.9466 - val_loss: 0.4952 Epoch 50/50 50/50 ━━━━━━━━━━━━━━━━━━━━ 38s 763ms/step - accuracy: 0.9898 - loss: 0.0312 - val_accuracy: 0.9427 - val_loss: 0.5609
# Save the model
# this is baseline model with rotation range = 20
model.save('new_cnn_model_2.keras')
import matplotlib.pyplot as plt
# Plotting the training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('training_validation_loss.png')
plt.show()
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 101ms/step Val Accuracy = 0.9427
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 2s 110ms/step Test Accuracy = 0.9173
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.88 0.94 0.91 219
1 0.95 0.82 0.88 187
2 0.92 0.92 0.92 87
3 0.94 0.99 0.96 160
accuracy 0.92 653
macro avg 0.92 0.92 0.92 653
weighted avg 0.92 0.92 0.92 653
Loss Stability Concerns: Despite excellent performance metrics, the variability in the validation loss could still be a concern. It may suggest that the model could start overfitting if trained for more epochs without adjustments.
history = model.fit(X_train, y_train_new,
batch_size=64,
epochs=50,
steps_per_epoch=100,
validation_data=(X_valid, y_valid_new))
Epoch 1/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 394ms/step - accuracy: 0.9861 - loss: 0.0356 - val_accuracy: 0.9313 - val_loss: 0.5516 Epoch 2/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 515ms/step - accuracy: 0.9854 - loss: 0.0395 - val_accuracy: 0.9427 - val_loss: 0.4838 Epoch 3/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 49s 479ms/step - accuracy: 0.9863 - loss: 0.0466 - val_accuracy: 0.9389 - val_loss: 0.5167 Epoch 4/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 49s 481ms/step - accuracy: 0.9904 - loss: 0.0240 - val_accuracy: 0.9504 - val_loss: 0.4565 Epoch 5/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 48s 469ms/step - accuracy: 0.9896 - loss: 0.0271 - val_accuracy: 0.9351 - val_loss: 0.5200 Epoch 6/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 515ms/step - accuracy: 0.9911 - loss: 0.0332 - val_accuracy: 0.9427 - val_loss: 0.4462 Epoch 7/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 427ms/step - accuracy: 0.9931 - loss: 0.0218 - val_accuracy: 0.9427 - val_loss: 0.4205 Epoch 8/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 394ms/step - accuracy: 0.9918 - loss: 0.0275 - val_accuracy: 0.9313 - val_loss: 0.4520 Epoch 9/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 45s 448ms/step - accuracy: 0.9854 - loss: 0.0446 - val_accuracy: 0.9237 - val_loss: 0.5280 Epoch 10/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 419ms/step - accuracy: 0.9859 - loss: 0.0413 - val_accuracy: 0.9313 - val_loss: 0.5361 Epoch 11/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 396ms/step - accuracy: 0.9908 - loss: 0.0263 - val_accuracy: 0.9466 - val_loss: 0.5306 Epoch 12/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 44s 432ms/step - accuracy: 0.9872 - loss: 0.0373 - val_accuracy: 0.9389 - val_loss: 0.4706 Epoch 13/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 426ms/step - accuracy: 0.9905 - loss: 0.0328 - val_accuracy: 0.9351 - val_loss: 0.4630 Epoch 14/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 406ms/step - accuracy: 0.9916 - loss: 0.0253 - val_accuracy: 0.9313 - val_loss: 0.5127 Epoch 15/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 414ms/step - accuracy: 0.9857 - loss: 0.0457 - val_accuracy: 0.9160 - val_loss: 0.5916 Epoch 16/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 422ms/step - accuracy: 0.9826 - loss: 0.0566 - val_accuracy: 0.9389 - val_loss: 0.4681 Epoch 17/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 418ms/step - accuracy: 0.9882 - loss: 0.0367 - val_accuracy: 0.9237 - val_loss: 0.4785 Epoch 18/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 407ms/step - accuracy: 0.9899 - loss: 0.0242 - val_accuracy: 0.9046 - val_loss: 0.7800 Epoch 19/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 51s 498ms/step - accuracy: 0.9911 - loss: 0.0288 - val_accuracy: 0.9427 - val_loss: 0.5952 Epoch 20/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 56s 549ms/step - accuracy: 0.9956 - loss: 0.0171 - val_accuracy: 0.9351 - val_loss: 0.6594 Epoch 21/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 522ms/step - accuracy: 0.9913 - loss: 0.0264 - val_accuracy: 0.9427 - val_loss: 0.6845 Epoch 22/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 56s 555ms/step - accuracy: 0.9953 - loss: 0.0183 - val_accuracy: 0.9427 - val_loss: 0.5893 Epoch 23/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 55s 535ms/step - accuracy: 0.9968 - loss: 0.0114 - val_accuracy: 0.9427 - val_loss: 0.6818 Epoch 24/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 512ms/step - accuracy: 0.9953 - loss: 0.0230 - val_accuracy: 0.9389 - val_loss: 0.6214 Epoch 25/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 54s 530ms/step - accuracy: 0.9949 - loss: 0.0188 - val_accuracy: 0.9351 - val_loss: 0.6380 Epoch 26/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 510ms/step - accuracy: 0.9944 - loss: 0.0229 - val_accuracy: 0.9389 - val_loss: 0.7436 Epoch 27/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 517ms/step - accuracy: 0.9942 - loss: 0.0166 - val_accuracy: 0.9389 - val_loss: 0.6644 Epoch 28/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 526ms/step - accuracy: 0.9872 - loss: 0.0349 - val_accuracy: 0.9275 - val_loss: 0.7268 Epoch 29/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 516ms/step - accuracy: 0.9936 - loss: 0.0243 - val_accuracy: 0.9466 - val_loss: 0.6768 Epoch 30/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 515ms/step - accuracy: 0.9888 - loss: 0.0291 - val_accuracy: 0.9313 - val_loss: 0.5553 Epoch 31/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 513ms/step - accuracy: 0.9836 - loss: 0.0569 - val_accuracy: 0.9313 - val_loss: 0.6402 Epoch 32/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 51s 503ms/step - accuracy: 0.9914 - loss: 0.0280 - val_accuracy: 0.9504 - val_loss: 0.5774 Epoch 33/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 399ms/step - accuracy: 0.9852 - loss: 0.0441 - val_accuracy: 0.9389 - val_loss: 0.4621 Epoch 34/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 412ms/step - accuracy: 0.9868 - loss: 0.0392 - val_accuracy: 0.9466 - val_loss: 0.4100 Epoch 35/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 54s 534ms/step - accuracy: 0.9945 - loss: 0.0211 - val_accuracy: 0.9427 - val_loss: 0.4357 Epoch 36/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 60s 598ms/step - accuracy: 0.9942 - loss: 0.0218 - val_accuracy: 0.9427 - val_loss: 0.5096 Epoch 37/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 48s 474ms/step - accuracy: 0.9911 - loss: 0.0231 - val_accuracy: 0.9504 - val_loss: 0.4844 Epoch 38/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 56s 552ms/step - accuracy: 0.9934 - loss: 0.0187 - val_accuracy: 0.9389 - val_loss: 0.5852 Epoch 39/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 412ms/step - accuracy: 0.9952 - loss: 0.0111 - val_accuracy: 0.9466 - val_loss: 0.6731 Epoch 40/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 419ms/step - accuracy: 0.9937 - loss: 0.0232 - val_accuracy: 0.9351 - val_loss: 0.7436 Epoch 41/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 406ms/step - accuracy: 0.9859 - loss: 0.0451 - val_accuracy: 0.9580 - val_loss: 0.4898 Epoch 42/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 518ms/step - accuracy: 0.9888 - loss: 0.0285 - val_accuracy: 0.9351 - val_loss: 0.6509 Epoch 43/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 51s 496ms/step - accuracy: 0.9893 - loss: 0.0317 - val_accuracy: 0.9313 - val_loss: 0.4865 Epoch 44/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 50s 490ms/step - accuracy: 0.9880 - loss: 0.0427 - val_accuracy: 0.9466 - val_loss: 0.5625 Epoch 45/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 51s 506ms/step - accuracy: 0.9873 - loss: 0.0441 - val_accuracy: 0.9466 - val_loss: 0.4856 Epoch 46/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 520ms/step - accuracy: 0.9909 - loss: 0.0250 - val_accuracy: 0.9275 - val_loss: 0.6252 Epoch 47/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 52s 510ms/step - accuracy: 0.9846 - loss: 0.0482 - val_accuracy: 0.9351 - val_loss: 0.5449 Epoch 48/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 50s 491ms/step - accuracy: 0.9914 - loss: 0.0350 - val_accuracy: 0.9313 - val_loss: 0.5463 Epoch 49/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 49s 485ms/step - accuracy: 0.9886 - loss: 0.0433 - val_accuracy: 0.9427 - val_loss: 0.6445 Epoch 50/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 46s 446ms/step - accuracy: 0.9814 - loss: 0.0612 - val_accuracy: 0.9237 - val_loss: 0.5988
# Save the model
model.save('new_cnn_model_3.keras')
import matplotlib.pyplot as plt
# Plotting the training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('training_validation_loss.png')
plt.show()
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 99ms/step Val Accuracy = 0.9237
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 2s 109ms/step Test Accuracy = 0.9449
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.97 0.92 0.94 219
1 0.95 0.93 0.94 187
2 0.88 0.94 0.91 87
3 0.95 1.00 0.97 160
accuracy 0.94 653
macro avg 0.94 0.95 0.94 653
weighted avg 0.95 0.94 0.94 653
history = model.fit(X_train, y_train_new,
batch_size=64,
epochs=35,
steps_per_epoch=100,
validation_data=(X_valid, y_valid_new))
Epoch 1/35 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:02 991ms/step - accuracy: 0.3305 - loss: 4.8506
C:\Users\yanch\anaconda3\Lib\contextlib.py:155: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 375ms/step - accuracy: 0.3932 - loss: 3.3018 - val_accuracy: 0.4466 - val_loss: 1.3470 Epoch 2/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 37s 368ms/step - accuracy: 0.5852 - loss: 0.9946 - val_accuracy: 0.3130 - val_loss: 1.3530 Epoch 3/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.6493 - loss: 0.8459 - val_accuracy: 0.3053 - val_loss: 1.4348 Epoch 4/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 369ms/step - accuracy: 0.6894 - loss: 0.7776 - val_accuracy: 0.3244 - val_loss: 1.5073 Epoch 5/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.7075 - loss: 0.7224 - val_accuracy: 0.3550 - val_loss: 1.4179 Epoch 6/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 378ms/step - accuracy: 0.7537 - loss: 0.6409 - val_accuracy: 0.3588 - val_loss: 1.4829 Epoch 7/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 368ms/step - accuracy: 0.7665 - loss: 0.5970 - val_accuracy: 0.4504 - val_loss: 1.2290 Epoch 8/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.8033 - loss: 0.5161 - val_accuracy: 0.4924 - val_loss: 1.2251 Epoch 9/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.8198 - loss: 0.4714 - val_accuracy: 0.5534 - val_loss: 1.1228 Epoch 10/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.8283 - loss: 0.4445 - val_accuracy: 0.7099 - val_loss: 0.7793 Epoch 11/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 45s 444ms/step - accuracy: 0.8432 - loss: 0.4166 - val_accuracy: 0.7519 - val_loss: 0.6910 Epoch 12/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 425ms/step - accuracy: 0.8540 - loss: 0.3607 - val_accuracy: 0.7366 - val_loss: 0.6631 Epoch 13/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 44s 432ms/step - accuracy: 0.8700 - loss: 0.3242 - val_accuracy: 0.7290 - val_loss: 0.7293 Epoch 14/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 410ms/step - accuracy: 0.8770 - loss: 0.3202 - val_accuracy: 0.8282 - val_loss: 0.4950 Epoch 15/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 47s 461ms/step - accuracy: 0.9050 - loss: 0.2347 - val_accuracy: 0.7710 - val_loss: 0.7423 Epoch 16/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 411ms/step - accuracy: 0.9131 - loss: 0.2261 - val_accuracy: 0.8473 - val_loss: 0.4228 Epoch 17/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 48s 474ms/step - accuracy: 0.9374 - loss: 0.1804 - val_accuracy: 0.8855 - val_loss: 0.3896 Epoch 18/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 46s 449ms/step - accuracy: 0.9257 - loss: 0.1934 - val_accuracy: 0.8893 - val_loss: 0.3702 Epoch 19/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 394ms/step - accuracy: 0.9385 - loss: 0.1609 - val_accuracy: 0.9046 - val_loss: 0.3787 Epoch 20/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 409ms/step - accuracy: 0.9522 - loss: 0.1312 - val_accuracy: 0.8969 - val_loss: 0.4051 Epoch 21/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 412ms/step - accuracy: 0.9461 - loss: 0.1377 - val_accuracy: 0.8435 - val_loss: 0.4281 Epoch 22/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 45s 441ms/step - accuracy: 0.9588 - loss: 0.1241 - val_accuracy: 0.8969 - val_loss: 0.3753 Epoch 23/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 391ms/step - accuracy: 0.9568 - loss: 0.1206 - val_accuracy: 0.9122 - val_loss: 0.3589 Epoch 24/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 383ms/step - accuracy: 0.9659 - loss: 0.0923 - val_accuracy: 0.9122 - val_loss: 0.3198 Epoch 25/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 401ms/step - accuracy: 0.9632 - loss: 0.1027 - val_accuracy: 0.8969 - val_loss: 0.3980 Epoch 26/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9676 - loss: 0.0879 - val_accuracy: 0.8969 - val_loss: 0.4138 Epoch 27/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9734 - loss: 0.0846 - val_accuracy: 0.8893 - val_loss: 0.4401 Epoch 28/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 381ms/step - accuracy: 0.9737 - loss: 0.0844 - val_accuracy: 0.9198 - val_loss: 0.3196 Epoch 29/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 370ms/step - accuracy: 0.9690 - loss: 0.0896 - val_accuracy: 0.8740 - val_loss: 0.4665 Epoch 30/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9777 - loss: 0.0713 - val_accuracy: 0.9160 - val_loss: 0.3465 Epoch 31/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 378ms/step - accuracy: 0.9777 - loss: 0.0761 - val_accuracy: 0.9084 - val_loss: 0.3995 Epoch 32/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 382ms/step - accuracy: 0.9735 - loss: 0.0870 - val_accuracy: 0.9160 - val_loss: 0.3470 Epoch 33/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9787 - loss: 0.0633 - val_accuracy: 0.9198 - val_loss: 0.3858 Epoch 34/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9821 - loss: 0.0524 - val_accuracy: 0.9237 - val_loss: 0.3363 Epoch 35/35 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 381ms/step - accuracy: 0.9829 - loss: 0.0523 - val_accuracy: 0.9237 - val_loss: 0.3193
# Save the model
model.save('new_cnn_model_6.keras')
import matplotlib.pyplot as plt
# Plotting the training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('training_validation_loss.png')
plt.show()
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 127ms/step Val Accuracy = 0.9237
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 3s 125ms/step Test Accuracy = 0.9280
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.90 0.93 0.92 219
1 0.95 0.86 0.90 187
2 0.87 0.94 0.91 87
3 0.98 1.00 0.99 160
accuracy 0.93 653
macro avg 0.92 0.93 0.93 653
weighted avg 0.93 0.93 0.93 653
Validation Accuracy: The model achieves a validation accuracy of 92.75%.
Test Accuracy: The test accuracy is even higher at 90.96%.
Precision and Recall: All classes show strong precision (85-96%) and recall (80-100%). This indicates that the model is not only correctly identifying positive cases but is also precise in its predictions, minimizing false positives.
F1-Score: High F1-scores across all classes (89-97%) suggest a balanced performance between precision and recall, which is crucial for reliable classification.
Accuracy Curve: The training accuracy plateaus close to 100%, while the validation accuracy stabilizes at a high level but with some gap compared to the training, indicating a slight overfitting but still within an acceptable range. Loss Curve: Training loss decreases sharply and flattens, which is ideal. However, the validation loss, despite decreasing, shows more fluctuations, which is typical but should be monitored to ensure it doesn't start to diverge from the training loss significantly.
I've implemented useful callbacks like EarlyStopping and ReduceLROnPlateau, which are beneficial for handling overfitting and optimizing the training process:
EarlyStopping is configured to monitor the training loss, stopping the training if there are no improvements beyond a minimal delta, indicating that continuing training is inefficient.
ReduceLROnPlateau reduces the learning rate when the validation loss stops improving, helping the model to fine-tune adjustments in weights and potentially escape local minima.
The EarlyStopping and ReduceLROnPlateau are both callbacks in Keras that serve as training interventions to improve the training process and prevent overfitting. Each of these has specific roles and is used to monitor different aspects of the model during training. Let’s delve into the goals and functionalities of each:
EarlyStopping Goal: To halt the training process early if there is no significant improvement in a specified metric over a defined number of epochs. This is particularly useful in avoiding overfitting and unnecessarily long training times.
ReduceLROnPlateau Goal: To reduce the learning rate when a metric has stopped improving. This helps the model to fine-tune and potentially escape local minima during training. Lowering the learning rate can allow the model to make smaller changes to the weights and potentially discover better minima.
# Stop training if loss doesn't keep decreasing.
model_es = EarlyStopping(monitor='loss', min_delta=1e-9, patience=12, verbose=True)
model_rlr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=6, verbose=True)
history = model.fit(X_train, y_train_new, batch_size=64, epochs=100, validation_data=(X_valid, y_valid_new),
callbacks=[model_es, model_rlr])
Epoch 1/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.3408 - loss: 4.1512 - val_accuracy: 0.1641 - val_loss: 1.3980 - learning_rate: 0.0010 Epoch 2/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.6022 - loss: 0.9985 - val_accuracy: 0.3397 - val_loss: 1.3390 - learning_rate: 0.0010 Epoch 3/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 40s 1s/step - accuracy: 0.6243 - loss: 0.8712 - val_accuracy: 0.3588 - val_loss: 1.3417 - learning_rate: 0.0010 Epoch 4/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.6738 - loss: 0.7678 - val_accuracy: 0.3626 - val_loss: 1.5721 - learning_rate: 0.0010 Epoch 5/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.7683 - loss: 0.5974 - val_accuracy: 0.3588 - val_loss: 1.3576 - learning_rate: 0.0010 Epoch 6/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.7721 - loss: 0.5671 - val_accuracy: 0.4924 - val_loss: 1.1619 - learning_rate: 0.0010 Epoch 7/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 40s 1s/step - accuracy: 0.8083 - loss: 0.4945 - val_accuracy: 0.6565 - val_loss: 0.8985 - learning_rate: 0.0010 Epoch 8/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.8143 - loss: 0.4453 - val_accuracy: 0.6603 - val_loss: 0.8329 - learning_rate: 0.0010 Epoch 9/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.8614 - loss: 0.3480 - val_accuracy: 0.7214 - val_loss: 0.7044 - learning_rate: 0.0010 Epoch 10/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 40s 1s/step - accuracy: 0.8816 - loss: 0.3137 - val_accuracy: 0.7481 - val_loss: 0.6579 - learning_rate: 0.0010 Epoch 11/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.8578 - loss: 0.3504 - val_accuracy: 0.8092 - val_loss: 0.5405 - learning_rate: 0.0010 Epoch 12/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 40s 1s/step - accuracy: 0.9146 - loss: 0.2361 - val_accuracy: 0.8206 - val_loss: 0.5122 - learning_rate: 0.0010 Epoch 13/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9134 - loss: 0.2439 - val_accuracy: 0.8969 - val_loss: 0.4035 - learning_rate: 0.0010 Epoch 14/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 42s 1s/step - accuracy: 0.9283 - loss: 0.1944 - val_accuracy: 0.8931 - val_loss: 0.3910 - learning_rate: 0.0010 Epoch 15/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 45s 1s/step - accuracy: 0.9340 - loss: 0.2014 - val_accuracy: 0.8931 - val_loss: 0.3655 - learning_rate: 0.0010 Epoch 16/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 48s 1s/step - accuracy: 0.9562 - loss: 0.1278 - val_accuracy: 0.8893 - val_loss: 0.4169 - learning_rate: 0.0010 Epoch 17/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 48s 1s/step - accuracy: 0.9317 - loss: 0.1774 - val_accuracy: 0.9160 - val_loss: 0.3436 - learning_rate: 0.0010 Epoch 18/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.9531 - loss: 0.1303 - val_accuracy: 0.8893 - val_loss: 0.4071 - learning_rate: 0.0010 Epoch 19/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 42s 1s/step - accuracy: 0.9679 - loss: 0.1018 - val_accuracy: 0.9237 - val_loss: 0.3675 - learning_rate: 0.0010 Epoch 20/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 43s 1s/step - accuracy: 0.9721 - loss: 0.0828 - val_accuracy: 0.9237 - val_loss: 0.3545 - learning_rate: 0.0010 Epoch 21/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9722 - loss: 0.0815 - val_accuracy: 0.8550 - val_loss: 0.4985 - learning_rate: 0.0010 Epoch 22/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 40s 1s/step - accuracy: 0.9559 - loss: 0.1091 - val_accuracy: 0.9008 - val_loss: 0.4358 - learning_rate: 0.0010 Epoch 23/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 0s 1s/step - accuracy: 0.9783 - loss: 0.0598 Epoch 23: ReduceLROnPlateau reducing learning rate to 0.0003000000142492354. 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.9783 - loss: 0.0600 - val_accuracy: 0.8931 - val_loss: 0.4360 - learning_rate: 0.0010 Epoch 24/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9809 - loss: 0.0616 - val_accuracy: 0.9237 - val_loss: 0.4380 - learning_rate: 3.0000e-04 Epoch 25/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.9850 - loss: 0.0484 - val_accuracy: 0.9198 - val_loss: 0.4186 - learning_rate: 3.0000e-04 Epoch 26/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9890 - loss: 0.0320 - val_accuracy: 0.9313 - val_loss: 0.4050 - learning_rate: 3.0000e-04 Epoch 27/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9886 - loss: 0.0322 - val_accuracy: 0.9275 - val_loss: 0.3922 - learning_rate: 3.0000e-04 Epoch 28/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.9924 - loss: 0.0293 - val_accuracy: 0.9275 - val_loss: 0.4017 - learning_rate: 3.0000e-04 Epoch 29/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 0s 1s/step - accuracy: 0.9883 - loss: 0.0285 Epoch 29: ReduceLROnPlateau reducing learning rate to 9.000000427477062e-05. 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9883 - loss: 0.0286 - val_accuracy: 0.9237 - val_loss: 0.4063 - learning_rate: 3.0000e-04 Epoch 30/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.9908 - loss: 0.0232 - val_accuracy: 0.9275 - val_loss: 0.4082 - learning_rate: 9.0000e-05 Epoch 31/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 41s 1s/step - accuracy: 0.9893 - loss: 0.0254 - val_accuracy: 0.9198 - val_loss: 0.4178 - learning_rate: 9.0000e-05 Epoch 32/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.9930 - loss: 0.0190 - val_accuracy: 0.9237 - val_loss: 0.4098 - learning_rate: 9.0000e-05 Epoch 33/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 41s 1s/step - accuracy: 0.9945 - loss: 0.0169 - val_accuracy: 0.9275 - val_loss: 0.4167 - learning_rate: 9.0000e-05 Epoch 34/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9964 - loss: 0.0160 - val_accuracy: 0.9237 - val_loss: 0.4290 - learning_rate: 9.0000e-05 Epoch 35/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 0s 1s/step - accuracy: 0.9949 - loss: 0.0200 Epoch 35: ReduceLROnPlateau reducing learning rate to 2.700000040931627e-05. 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9949 - loss: 0.0200 - val_accuracy: 0.9237 - val_loss: 0.4500 - learning_rate: 9.0000e-05 Epoch 36/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 38s 1s/step - accuracy: 0.9931 - loss: 0.0202 - val_accuracy: 0.9198 - val_loss: 0.4375 - learning_rate: 2.7000e-05 Epoch 37/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 39s 1s/step - accuracy: 0.9936 - loss: 0.0185 - val_accuracy: 0.9198 - val_loss: 0.4368 - learning_rate: 2.7000e-05 Epoch 38/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9951 - loss: 0.0168 - val_accuracy: 0.9237 - val_loss: 0.4384 - learning_rate: 2.7000e-05 Epoch 39/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 44s 1s/step - accuracy: 0.9930 - loss: 0.0242 - val_accuracy: 0.9237 - val_loss: 0.4341 - learning_rate: 2.7000e-05 Epoch 40/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 48s 1s/step - accuracy: 0.9914 - loss: 0.0349 - val_accuracy: 0.9275 - val_loss: 0.4438 - learning_rate: 2.7000e-05 Epoch 41/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 0s 1s/step - accuracy: 0.9928 - loss: 0.0206 Epoch 41: ReduceLROnPlateau reducing learning rate to 8.100000013655517e-06. 37/37 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9928 - loss: 0.0207 - val_accuracy: 0.9275 - val_loss: 0.4437 - learning_rate: 2.7000e-05 Epoch 42/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9938 - loss: 0.0207 - val_accuracy: 0.9275 - val_loss: 0.4443 - learning_rate: 8.1000e-06 Epoch 43/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 48s 1s/step - accuracy: 0.9900 - loss: 0.0334 - val_accuracy: 0.9275 - val_loss: 0.4419 - learning_rate: 8.1000e-06 Epoch 44/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9928 - loss: 0.0202 - val_accuracy: 0.9275 - val_loss: 0.4428 - learning_rate: 8.1000e-06 Epoch 45/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 42s 1s/step - accuracy: 0.9939 - loss: 0.0176 - val_accuracy: 0.9275 - val_loss: 0.4450 - learning_rate: 8.1000e-06 Epoch 46/100 37/37 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.9941 - loss: 0.0169 - val_accuracy: 0.9275 - val_loss: 0.4445 - learning_rate: 8.1000e-06 Epoch 46: early stopping
# Save the model
# this is baseline model with rotation range = 20
model.save('new_cnn_model1.keras')
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 125ms/step Val Accuracy = 0.9275
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 2s 106ms/step Test Accuracy = 0.9096
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.85 0.93 0.89 198
1 0.91 0.80 0.85 183
2 0.96 0.90 0.93 104
3 0.95 1.00 0.97 168
accuracy 0.91 653
macro avg 0.92 0.91 0.91 653
weighted avg 0.91 0.91 0.91 653
Analysis:
High Specificity in Some Classes: The model is highly specific in recognizing pituitary tumors and generally good at identifying glioma tumors.
Challenges with Meningioma: There seems to be some confusion between meningioma and glioma tumors, which might require further investigation. Feature similarities between these types could be causing the model to struggle in differentiating them accurately.
Potential for Serious Misclassification: The misclassification between tumorous and non-tumorous scans, although low, is a critical error and should be minimized as much as possible.
labels = ['glioma_tumor', 'meningioma_tumor', 'no_tumor', 'pituitary_tumor']
# Define the custom color map
custom_colors = ['#01411C','#4B6F44','#4F7942','#74C365','#D0F0C0']
custom_cmap = matplotlib.colors.ListedColormap(custom_colors)
# Calculate confusion matrix
confusion_matrix = confusion_matrix(Y_test, y_pred)
# Create a display object with the custom color map
disp = ConfusionMatrixDisplay(confusion_matrix=confusion_matrix, display_labels = ['glioma_tumor', 'meningioma_tumor', 'no_tumor', 'pituitary_tumor'])
# Plot the confusion matrix
fig, ax = plt.subplots()
disp.plot(cmap=custom_cmap, ax=ax)
# Set the title and axis labels
fig.text(s='Heatmap of the Confusion Matrix',size=18,fontweight='bold',
fontname='monospace',color=colors_dark[1],y=0.92,x=0.10,alpha=0.8)
# Rotate x-axis labels
plt.xticks(rotation=45)
# Save the figure
plt.savefig('CM CNN-2.png', dpi=300, bbox_inches='tight')
# Show the plot
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(10, 5), facecolor='white')
# Plot training and validation accuracy
ax[0].plot(history.history['accuracy'])
ax[0].plot(history.history['val_accuracy'])
ax[0].set_title('Model Accuracy')
ax[0].set_xlabel('Epoch')
ax[0].set_ylabel('Accuracy')
ax[0].legend(['Train', 'Validation'], loc='upper left')
# Plot training and validation loss
ax[1].plot(history.history['loss'])
ax[1].plot(history.history['val_loss'])
ax[1].set_title('Model Loss')
ax[1].set_xlabel('Epoch')
ax[1].set_ylabel('Loss')
ax[1].legend(['Train', 'Validation'], loc='upper right')
# Save the figure
plt.savefig('plot CNN-2.png', dpi=300, bbox_inches='tight')
plt.tight_layout()
plt.show()
from sklearn.metrics import roc_curve, roc_auc_score
import numpy as np
# Compute predicted probabilities for each class
y_probs = model.predict(X_test)
# Ensure that the target labels Y_test are in a 2-dimensional format
if len(Y_test.shape) == 1:
Y_test = np.eye(len(np.unique(Y_test)))[Y_test.astype(int)]
# Compute the ROC curve and AUC score for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(Y_test.shape[1]):
fpr[i], tpr[i], _ = roc_curve(Y_test[:, i], y_probs[:, i])
roc_auc[i] = roc_auc_score(Y_test[:, i], y_probs[:, i])
# Plot the ROC curve for each class
plt.figure()
for i in range(Y_test.shape[1]):
plt.plot(fpr[i], tpr[i], label=f'Class {i} (AUC = {roc_auc[i]:.2f})')
# Set the title and axis labels
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')
# Save the figure
plt.savefig('ROC CNN-2.png', dpi=300, bbox_inches='tight')
# Show the plot
plt.show()
21/21 ━━━━━━━━━━━━━━━━━━━━ 2s 107ms/step
class_labels=['glioma_tumor', 'meningioma_tumor', 'no_tumor', 'pituitary_tumor']
plt.figure(figsize=(16,20))
for i in range(16):
plt.subplot(4,4,i+1)
plt.imshow(X_test[i])
actual_label_idx = np.argmax(Y_test[i]) # Assuming Y_test is one-hot encoded
predicted_label_idx = np.argmax(y_pred[i]) # Assuming y_pred is one-hot encoded
plt.title(f"Actual label:{class_labels[actual_label_idx]}\nPredicted label:{class_labels[predicted_label_idx]}")
plt.axis("off")
Adding L2 regularization.
Benefits:
Generalization: Regularization can improve model generalization, which means it might perform better on new, unseen data.
Model Complexity Management: It helps to manage model complexity by discouraging learning overly large weights.
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, BatchNormalization, MaxPooling2D, Dropout, Flatten, Dense
from keras.regularizers import l2
model = Sequential()
model.add(InputLayer(input_shape=(image_size, image_size, 3)))
# Add L2 regularization to convolutional layers
model.add(Conv2D(16, kernel_size=(5, 5), activation='relu', kernel_regularizer=l2(0.001)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', kernel_regularizer=l2(0.001)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', kernel_regularizer=l2(0.001)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', kernel_regularizer=l2(0.001)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(256, kernel_size=(3, 3), activation='relu', kernel_regularizer=l2(0.001)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
# Add L2 regularization to the dense layer
model.add(Dense(512, activation='relu', kernel_regularizer=l2(0.001)))
model.add(Dropout(0.2))
model.add(Dense(4, activation='softmax', kernel_regularizer=l2(0.001)))
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Summarize the model
model.summary()
C:\Users\yanch\anaconda3\Lib\site-packages\keras\src\layers\core\input_layer.py:25: UserWarning: Argument `input_shape` is deprecated. Use `shape` instead.
Model: "sequential_3"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ conv2d_15 (Conv2D) │ (None, 220, 220, 16) │ 1,216 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ batch_normalization_6 │ (None, 220, 220, 16) │ 64 │ │ (BatchNormalization) │ │ │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_15 (MaxPooling2D) │ (None, 110, 110, 16) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_15 (Dropout) │ (None, 110, 110, 16) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_16 (Conv2D) │ (None, 108, 108, 32) │ 4,640 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ batch_normalization_7 │ (None, 108, 108, 32) │ 128 │ │ (BatchNormalization) │ │ │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_16 (MaxPooling2D) │ (None, 54, 54, 32) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_17 (Conv2D) │ (None, 52, 52, 64) │ 18,496 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_17 (MaxPooling2D) │ (None, 26, 26, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_16 (Dropout) │ (None, 26, 26, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_18 (Conv2D) │ (None, 24, 24, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_18 (MaxPooling2D) │ (None, 12, 12, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_17 (Dropout) │ (None, 12, 12, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ conv2d_19 (Conv2D) │ (None, 10, 10, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ max_pooling2d_19 (MaxPooling2D) │ (None, 5, 5, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_18 (Dropout) │ (None, 5, 5, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_3 (Flatten) │ (None, 6400) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_6 (Dense) │ (None, 512) │ 3,277,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout_19 (Dropout) │ (None, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_7 (Dense) │ (None, 4) │ 2,052 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 3,672,932 (14.01 MB)
Trainable params: 3,672,836 (14.01 MB)
Non-trainable params: 96 (384.00 B)
# Stop training if loss doesn't keep decreasing.
model_es = EarlyStopping(monitor='loss', min_delta=1e-9, patience=12, verbose=True)
model_rlr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=6, verbose=True)
history = model.fit(X_train, y_train_new, batch_size=64,
epochs=100, steps_per_epoch=100,
validation_data=(X_valid, y_valid_new),
callbacks=[model_es, model_rlr])
Epoch 1/100 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:04 1s/step - accuracy: 0.3143 - loss: 5.8229
C:\Users\yanch\anaconda3\Lib\contextlib.py:155: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
100/100 ━━━━━━━━━━━━━━━━━━━━ 44s 386ms/step - accuracy: 0.3807 - loss: 4.3813 - val_accuracy: 0.3855 - val_loss: 2.4778 - learning_rate: 0.0010 Epoch 2/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 415ms/step - accuracy: 0.5980 - loss: 2.0368 - val_accuracy: 0.4008 - val_loss: 2.2304 - learning_rate: 0.0010 Epoch 3/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 392ms/step - accuracy: 0.6510 - loss: 1.8047 - val_accuracy: 0.4084 - val_loss: 2.1156 - learning_rate: 0.0010 Epoch 4/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 415ms/step - accuracy: 0.7183 - loss: 1.5772 - val_accuracy: 0.4695 - val_loss: 1.9620 - learning_rate: 0.0010 Epoch 5/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 389ms/step - accuracy: 0.7302 - loss: 1.4731 - val_accuracy: 0.3855 - val_loss: 2.2376 - learning_rate: 0.0010 Epoch 6/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.7735 - loss: 1.3210 - val_accuracy: 0.4962 - val_loss: 1.9483 - learning_rate: 0.0010 Epoch 7/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.7887 - loss: 1.2306 - val_accuracy: 0.5382 - val_loss: 1.7484 - learning_rate: 0.0010 Epoch 8/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 387ms/step - accuracy: 0.8212 - loss: 1.1324 - val_accuracy: 0.6718 - val_loss: 1.4281 - learning_rate: 0.0010 Epoch 9/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 386ms/step - accuracy: 0.8343 - loss: 1.0336 - val_accuracy: 0.6718 - val_loss: 1.4472 - learning_rate: 0.0010 Epoch 10/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 389ms/step - accuracy: 0.8700 - loss: 0.9473 - val_accuracy: 0.7290 - val_loss: 1.3150 - learning_rate: 0.0010 Epoch 11/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.8617 - loss: 0.9033 - val_accuracy: 0.7023 - val_loss: 1.3711 - learning_rate: 0.0010 Epoch 12/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.8725 - loss: 0.8711 - val_accuracy: 0.8282 - val_loss: 1.0082 - learning_rate: 0.0010 Epoch 13/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 383ms/step - accuracy: 0.8957 - loss: 0.7712 - val_accuracy: 0.8282 - val_loss: 0.9756 - learning_rate: 0.0010 Epoch 14/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9051 - loss: 0.7334 - val_accuracy: 0.7634 - val_loss: 1.1602 - learning_rate: 0.0010 Epoch 15/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9063 - loss: 0.7070 - val_accuracy: 0.8511 - val_loss: 0.9184 - learning_rate: 0.0010 Epoch 16/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9265 - loss: 0.6610 - val_accuracy: 0.8397 - val_loss: 0.8819 - learning_rate: 0.0010 Epoch 17/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9296 - loss: 0.6318 - val_accuracy: 0.8702 - val_loss: 0.8010 - learning_rate: 0.0010 Epoch 18/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 377ms/step - accuracy: 0.9358 - loss: 0.5939 - val_accuracy: 0.8435 - val_loss: 0.8295 - learning_rate: 0.0010 Epoch 19/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 393ms/step - accuracy: 0.9427 - loss: 0.5674 - val_accuracy: 0.8931 - val_loss: 0.8003 - learning_rate: 0.0010 Epoch 20/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 46s 454ms/step - accuracy: 0.9394 - loss: 0.5677 - val_accuracy: 0.7977 - val_loss: 0.8526 - learning_rate: 0.0010 Epoch 21/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 416ms/step - accuracy: 0.9512 - loss: 0.5083 - val_accuracy: 0.8626 - val_loss: 0.8103 - learning_rate: 0.0010 Epoch 22/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 47s 459ms/step - accuracy: 0.9578 - loss: 0.4868 - val_accuracy: 0.8969 - val_loss: 0.7518 - learning_rate: 0.0010 Epoch 23/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 46s 448ms/step - accuracy: 0.9518 - loss: 0.5010 - val_accuracy: 0.9160 - val_loss: 0.7285 - learning_rate: 0.0010 Epoch 24/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 390ms/step - accuracy: 0.9614 - loss: 0.4790 - val_accuracy: 0.9160 - val_loss: 0.7027 - learning_rate: 0.0010 Epoch 25/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9609 - loss: 0.4597 - val_accuracy: 0.8931 - val_loss: 0.7767 - learning_rate: 0.0010 Epoch 26/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9577 - loss: 0.4632 - val_accuracy: 0.9122 - val_loss: 0.6755 - learning_rate: 0.0010 Epoch 27/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 389ms/step - accuracy: 0.9716 - loss: 0.4179 - val_accuracy: 0.8740 - val_loss: 0.7738 - learning_rate: 0.0010 Epoch 28/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 378ms/step - accuracy: 0.9592 - loss: 0.4300 - val_accuracy: 0.8855 - val_loss: 0.7110 - learning_rate: 0.0010 Epoch 29/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9593 - loss: 0.4346 - val_accuracy: 0.9237 - val_loss: 0.7253 - learning_rate: 0.0010 Epoch 30/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9656 - loss: 0.4082 - val_accuracy: 0.9008 - val_loss: 0.6899 - learning_rate: 0.0010 Epoch 31/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9669 - loss: 0.4135 - val_accuracy: 0.9160 - val_loss: 0.7151 - learning_rate: 0.0010 Epoch 32/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9685 - loss: 0.3976 - val_accuracy: 0.9198 - val_loss: 0.6609 - learning_rate: 0.0010 Epoch 33/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 388ms/step - accuracy: 0.9679 - loss: 0.3811 - val_accuracy: 0.9084 - val_loss: 0.7834 - learning_rate: 0.0010 Epoch 34/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9753 - loss: 0.3528 - val_accuracy: 0.9122 - val_loss: 0.7179 - learning_rate: 0.0010 Epoch 35/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9718 - loss: 0.3669 - val_accuracy: 0.9084 - val_loss: 0.7089 - learning_rate: 0.0010 Epoch 36/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9750 - loss: 0.3534 - val_accuracy: 0.8817 - val_loss: 0.7503 - learning_rate: 0.0010 Epoch 37/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9754 - loss: 0.3506 - val_accuracy: 0.9084 - val_loss: 0.6314 - learning_rate: 0.0010 Epoch 38/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9736 - loss: 0.3497 - val_accuracy: 0.9160 - val_loss: 0.6576 - learning_rate: 0.0010 Epoch 39/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9765 - loss: 0.3485 - val_accuracy: 0.9008 - val_loss: 0.6937 - learning_rate: 0.0010 Epoch 40/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 412ms/step - accuracy: 0.9715 - loss: 0.3521 - val_accuracy: 0.8855 - val_loss: 0.6716 - learning_rate: 0.0010 Epoch 41/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9764 - loss: 0.3356 - val_accuracy: 0.9160 - val_loss: 0.6221 - learning_rate: 0.0010 Epoch 42/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9752 - loss: 0.3409 - val_accuracy: 0.9122 - val_loss: 0.6489 - learning_rate: 0.0010 Epoch 43/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9813 - loss: 0.3226 - val_accuracy: 0.9237 - val_loss: 0.6284 - learning_rate: 0.0010 Epoch 44/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9796 - loss: 0.3198 - val_accuracy: 0.9008 - val_loss: 0.6711 - learning_rate: 0.0010 Epoch 45/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9738 - loss: 0.3216 - val_accuracy: 0.8435 - val_loss: 0.7575 - learning_rate: 0.0010 Epoch 46/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9753 - loss: 0.3162 - val_accuracy: 0.8855 - val_loss: 0.6177 - learning_rate: 0.0010 Epoch 47/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 382ms/step - accuracy: 0.9722 - loss: 0.3363 - val_accuracy: 0.9275 - val_loss: 0.5672 - learning_rate: 0.0010 Epoch 48/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9820 - loss: 0.3100 - val_accuracy: 0.9237 - val_loss: 0.5777 - learning_rate: 0.0010 Epoch 49/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9848 - loss: 0.2854 - val_accuracy: 0.9008 - val_loss: 0.6002 - learning_rate: 0.0010 Epoch 50/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 386ms/step - accuracy: 0.9805 - loss: 0.2899 - val_accuracy: 0.9046 - val_loss: 0.5902 - learning_rate: 0.0010 Epoch 51/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9803 - loss: 0.2887 - val_accuracy: 0.9046 - val_loss: 0.6105 - learning_rate: 0.0010 Epoch 52/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 392ms/step - accuracy: 0.9824 - loss: 0.2841 - val_accuracy: 0.8969 - val_loss: 0.6389 - learning_rate: 0.0010 Epoch 53/100 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:03 1s/step - accuracy: 0.9728 - loss: 0.3079 Epoch 53: ReduceLROnPlateau reducing learning rate to 0.0003000000142492354. 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 377ms/step - accuracy: 0.9754 - loss: 0.3020 - val_accuracy: 0.9084 - val_loss: 0.5821 - learning_rate: 0.0010 Epoch 54/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9840 - loss: 0.2696 - val_accuracy: 0.9275 - val_loss: 0.5468 - learning_rate: 3.0000e-04 Epoch 55/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9914 - loss: 0.2512 - val_accuracy: 0.9313 - val_loss: 0.5521 - learning_rate: 3.0000e-04 Epoch 56/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9887 - loss: 0.2498 - val_accuracy: 0.9275 - val_loss: 0.5470 - learning_rate: 3.0000e-04 Epoch 57/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9952 - loss: 0.2368 - val_accuracy: 0.9237 - val_loss: 0.5668 - learning_rate: 3.0000e-04 Epoch 58/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9928 - loss: 0.2343 - val_accuracy: 0.9313 - val_loss: 0.5632 - learning_rate: 3.0000e-04 Epoch 59/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9913 - loss: 0.2327 - val_accuracy: 0.9275 - val_loss: 0.5522 - learning_rate: 3.0000e-04 Epoch 60/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 378ms/step - accuracy: 0.9933 - loss: 0.2235 - val_accuracy: 0.9237 - val_loss: 0.5410 - learning_rate: 3.0000e-04 Epoch 61/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9973 - loss: 0.2161 - val_accuracy: 0.9313 - val_loss: 0.5663 - learning_rate: 3.0000e-04 Epoch 62/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9948 - loss: 0.2142 - val_accuracy: 0.9198 - val_loss: 0.5619 - learning_rate: 3.0000e-04 Epoch 63/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9972 - loss: 0.2108 - val_accuracy: 0.9198 - val_loss: 0.5651 - learning_rate: 3.0000e-04 Epoch 64/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9922 - loss: 0.2144 - val_accuracy: 0.9275 - val_loss: 0.5506 - learning_rate: 3.0000e-04 Epoch 65/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9953 - loss: 0.2075 - val_accuracy: 0.9237 - val_loss: 0.5695 - learning_rate: 3.0000e-04 Epoch 66/100 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:04 1s/step - accuracy: 0.9960 - loss: 0.2067 Epoch 66: ReduceLROnPlateau reducing learning rate to 9.000000427477062e-05. 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 381ms/step - accuracy: 0.9956 - loss: 0.2080 - val_accuracy: 0.9160 - val_loss: 0.5567 - learning_rate: 3.0000e-04 Epoch 67/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9961 - loss: 0.2001 - val_accuracy: 0.9198 - val_loss: 0.5545 - learning_rate: 9.0000e-05 Epoch 68/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9969 - loss: 0.1975 - val_accuracy: 0.9160 - val_loss: 0.5502 - learning_rate: 9.0000e-05 Epoch 69/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9947 - loss: 0.2018 - val_accuracy: 0.9198 - val_loss: 0.5468 - learning_rate: 9.0000e-05 Epoch 70/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9919 - loss: 0.2042 - val_accuracy: 0.9275 - val_loss: 0.5335 - learning_rate: 9.0000e-05 Epoch 71/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9953 - loss: 0.1980 - val_accuracy: 0.9313 - val_loss: 0.5100 - learning_rate: 9.0000e-05 Epoch 72/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9951 - loss: 0.1970 - val_accuracy: 0.9313 - val_loss: 0.5107 - learning_rate: 9.0000e-05 Epoch 73/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 381ms/step - accuracy: 0.9942 - loss: 0.1991 - val_accuracy: 0.9313 - val_loss: 0.5147 - learning_rate: 9.0000e-05 Epoch 74/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9966 - loss: 0.1940 - val_accuracy: 0.9313 - val_loss: 0.5236 - learning_rate: 9.0000e-05 Epoch 75/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9954 - loss: 0.1972 - val_accuracy: 0.9313 - val_loss: 0.5174 - learning_rate: 9.0000e-05 Epoch 76/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9975 - loss: 0.1882 - val_accuracy: 0.9275 - val_loss: 0.5243 - learning_rate: 9.0000e-05 Epoch 77/100 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:03 1s/step - accuracy: 0.9970 - loss: 0.1895 Epoch 77: ReduceLROnPlateau reducing learning rate to 2.700000040931627e-05. 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9970 - loss: 0.1895 - val_accuracy: 0.9275 - val_loss: 0.5184 - learning_rate: 9.0000e-05 Epoch 78/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9960 - loss: 0.1912 - val_accuracy: 0.9313 - val_loss: 0.5154 - learning_rate: 2.7000e-05 Epoch 79/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9955 - loss: 0.1916 - val_accuracy: 0.9313 - val_loss: 0.5165 - learning_rate: 2.7000e-05 Epoch 80/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9938 - loss: 0.1947 - val_accuracy: 0.9275 - val_loss: 0.5173 - learning_rate: 2.7000e-05 Epoch 81/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9946 - loss: 0.1927 - val_accuracy: 0.9275 - val_loss: 0.5151 - learning_rate: 2.7000e-05 Epoch 82/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 406ms/step - accuracy: 0.9976 - loss: 0.1864 - val_accuracy: 0.9275 - val_loss: 0.5117 - learning_rate: 2.7000e-05 Epoch 83/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9954 - loss: 0.1896 - val_accuracy: 0.9275 - val_loss: 0.5085 - learning_rate: 2.7000e-05 Epoch 84/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9959 - loss: 0.1898 - val_accuracy: 0.9313 - val_loss: 0.5095 - learning_rate: 2.7000e-05 Epoch 85/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9949 - loss: 0.1882 - val_accuracy: 0.9313 - val_loss: 0.5103 - learning_rate: 2.7000e-05 Epoch 86/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9961 - loss: 0.1900 - val_accuracy: 0.9313 - val_loss: 0.5059 - learning_rate: 2.7000e-05 Epoch 87/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9944 - loss: 0.1908 - val_accuracy: 0.9275 - val_loss: 0.5031 - learning_rate: 2.7000e-05 Epoch 88/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 383ms/step - accuracy: 0.9972 - loss: 0.1882 - val_accuracy: 0.9275 - val_loss: 0.5051 - learning_rate: 2.7000e-05 Epoch 89/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 401ms/step - accuracy: 0.9957 - loss: 0.1891 - val_accuracy: 0.9275 - val_loss: 0.5059 - learning_rate: 2.7000e-05 Epoch 90/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9952 - loss: 0.1877 - val_accuracy: 0.9198 - val_loss: 0.5100 - learning_rate: 2.7000e-05 Epoch 91/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9952 - loss: 0.1877 - val_accuracy: 0.9198 - val_loss: 0.5103 - learning_rate: 2.7000e-05 Epoch 92/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 378ms/step - accuracy: 0.9966 - loss: 0.1856 - val_accuracy: 0.9275 - val_loss: 0.5111 - learning_rate: 2.7000e-05 Epoch 93/100 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:02 993ms/step - accuracy: 0.9981 - loss: 0.1823 Epoch 93: ReduceLROnPlateau reducing learning rate to 8.100000013655517e-06. 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9974 - loss: 0.1832 - val_accuracy: 0.9275 - val_loss: 0.5124 - learning_rate: 2.7000e-05 Epoch 94/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9953 - loss: 0.1870 - val_accuracy: 0.9275 - val_loss: 0.5111 - learning_rate: 8.1000e-06 Epoch 95/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9951 - loss: 0.1856 - val_accuracy: 0.9275 - val_loss: 0.5109 - learning_rate: 8.1000e-06 Epoch 96/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 422ms/step - accuracy: 0.9968 - loss: 0.1833 - val_accuracy: 0.9275 - val_loss: 0.5102 - learning_rate: 8.1000e-06 Epoch 97/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 398ms/step - accuracy: 0.9954 - loss: 0.1842 - val_accuracy: 0.9275 - val_loss: 0.5112 - learning_rate: 8.1000e-06 Epoch 98/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9945 - loss: 0.1913 - val_accuracy: 0.9275 - val_loss: 0.5095 - learning_rate: 8.1000e-06 Epoch 99/100 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:02 993ms/step - accuracy: 0.9898 - loss: 0.1934 Epoch 99: ReduceLROnPlateau reducing learning rate to 2.429999949526973e-06. 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.9917 - loss: 0.1902 - val_accuracy: 0.9275 - val_loss: 0.5102 - learning_rate: 8.1000e-06 Epoch 100/100 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9960 - loss: 0.1852 - val_accuracy: 0.9275 - val_loss: 0.5095 - learning_rate: 2.4300e-06
# Save the model
model.save('new_cnn_model_4.keras')
import matplotlib.pyplot as plt
# Plotting the training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('training_validation_loss.png')
plt.show()
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 129ms/step Val Accuracy = 0.9275
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 2s 112ms/step Test Accuracy = 0.9311
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.93 0.90 0.92 219
1 0.91 0.89 0.90 187
2 0.87 0.95 0.91 87
3 0.98 1.00 0.99 160
accuracy 0.93 653
macro avg 0.93 0.94 0.93 653
weighted avg 0.93 0.93 0.93 653
history = model.fit(X_train, y_train_new,
batch_size=64,
epochs=50,
steps_per_epoch=100,
validation_data=(X_valid, y_valid_new))
Epoch 1/50 37/100 ━━━━━━━━━━━━━━━━━━━━ 1:02 993ms/step - accuracy: 0.3502 - loss: 4.9467
C:\Users\yanch\anaconda3\Lib\contextlib.py:155: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 378ms/step - accuracy: 0.4042 - loss: 3.8348 - val_accuracy: 0.3740 - val_loss: 2.4503 Epoch 2/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 389ms/step - accuracy: 0.5966 - loss: 2.0031 - val_accuracy: 0.3321 - val_loss: 2.2479 Epoch 3/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 44s 433ms/step - accuracy: 0.6747 - loss: 1.7134 - val_accuracy: 0.4160 - val_loss: 2.0419 Epoch 4/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 43s 421ms/step - accuracy: 0.7260 - loss: 1.5040 - val_accuracy: 0.5115 - val_loss: 1.8385 Epoch 5/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 401ms/step - accuracy: 0.7534 - loss: 1.3519 - val_accuracy: 0.5763 - val_loss: 1.7061 Epoch 6/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.7812 - loss: 1.2427 - val_accuracy: 0.6527 - val_loss: 1.5581 Epoch 7/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.8054 - loss: 1.1485 - val_accuracy: 0.6260 - val_loss: 1.5350 Epoch 8/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 371ms/step - accuracy: 0.8025 - loss: 1.0647 - val_accuracy: 0.8015 - val_loss: 1.1499 Epoch 9/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.8462 - loss: 0.9407 - val_accuracy: 0.8053 - val_loss: 1.1268 Epoch 10/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.8593 - loss: 0.9083 - val_accuracy: 0.7939 - val_loss: 1.1076 Epoch 11/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 370ms/step - accuracy: 0.8843 - loss: 0.8041 - val_accuracy: 0.8359 - val_loss: 0.9391 Epoch 12/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 387ms/step - accuracy: 0.8960 - loss: 0.7536 - val_accuracy: 0.8092 - val_loss: 0.9475 Epoch 13/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 409ms/step - accuracy: 0.9193 - loss: 0.6858 - val_accuracy: 0.8092 - val_loss: 0.9896 Epoch 14/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 399ms/step - accuracy: 0.9322 - loss: 0.6511 - val_accuracy: 0.7443 - val_loss: 1.1384 Epoch 15/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9134 - loss: 0.6781 - val_accuracy: 0.8855 - val_loss: 0.7832 Epoch 16/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9331 - loss: 0.5934 - val_accuracy: 0.8626 - val_loss: 0.8133 Epoch 17/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9401 - loss: 0.5605 - val_accuracy: 0.8435 - val_loss: 0.8221 Epoch 18/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 369ms/step - accuracy: 0.9538 - loss: 0.5268 - val_accuracy: 0.9160 - val_loss: 0.6501 Epoch 19/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 378ms/step - accuracy: 0.9569 - loss: 0.5063 - val_accuracy: 0.9008 - val_loss: 0.7429 Epoch 20/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9646 - loss: 0.4758 - val_accuracy: 0.8702 - val_loss: 0.7847 Epoch 21/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9629 - loss: 0.4607 - val_accuracy: 0.9084 - val_loss: 0.6887 Epoch 22/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 53s 521ms/step - accuracy: 0.9530 - loss: 0.4843 - val_accuracy: 0.9237 - val_loss: 0.6813 Epoch 23/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 385ms/step - accuracy: 0.9563 - loss: 0.4863 - val_accuracy: 0.9084 - val_loss: 0.6225 Epoch 24/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9638 - loss: 0.4488 - val_accuracy: 0.9122 - val_loss: 0.6763 Epoch 25/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9611 - loss: 0.4338 - val_accuracy: 0.8817 - val_loss: 0.7527 Epoch 26/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9603 - loss: 0.4366 - val_accuracy: 0.8817 - val_loss: 0.6848 Epoch 27/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 386ms/step - accuracy: 0.9667 - loss: 0.4161 - val_accuracy: 0.9084 - val_loss: 0.6623 Epoch 28/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 374ms/step - accuracy: 0.9701 - loss: 0.3959 - val_accuracy: 0.8931 - val_loss: 0.6980 Epoch 29/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 377ms/step - accuracy: 0.9765 - loss: 0.3701 - val_accuracy: 0.9008 - val_loss: 0.6563 Epoch 30/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9820 - loss: 0.3497 - val_accuracy: 0.9198 - val_loss: 0.6643 Epoch 31/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 376ms/step - accuracy: 0.9632 - loss: 0.3976 - val_accuracy: 0.9313 - val_loss: 0.6321 Epoch 32/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9759 - loss: 0.3665 - val_accuracy: 0.9275 - val_loss: 0.6340 Epoch 33/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 67s 655ms/step - accuracy: 0.9830 - loss: 0.3416 - val_accuracy: 0.9313 - val_loss: 0.5801 Epoch 34/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 66s 653ms/step - accuracy: 0.9789 - loss: 0.3402 - val_accuracy: 0.9198 - val_loss: 0.5935 Epoch 35/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 42s 411ms/step - accuracy: 0.9747 - loss: 0.3526 - val_accuracy: 0.9313 - val_loss: 0.5860 Epoch 36/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9830 - loss: 0.3234 - val_accuracy: 0.9198 - val_loss: 0.6346 Epoch 37/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9786 - loss: 0.3260 - val_accuracy: 0.9160 - val_loss: 0.6723 Epoch 38/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9822 - loss: 0.3232 - val_accuracy: 0.9046 - val_loss: 0.6240 Epoch 39/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9774 - loss: 0.3307 - val_accuracy: 0.9160 - val_loss: 0.6046 Epoch 40/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 375ms/step - accuracy: 0.9773 - loss: 0.3313 - val_accuracy: 0.9046 - val_loss: 0.5940 Epoch 41/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9815 - loss: 0.3187 - val_accuracy: 0.8969 - val_loss: 0.6778 Epoch 42/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9810 - loss: 0.3098 - val_accuracy: 0.8779 - val_loss: 0.7394 Epoch 43/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9810 - loss: 0.3044 - val_accuracy: 0.9198 - val_loss: 0.6479 Epoch 44/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 372ms/step - accuracy: 0.9892 - loss: 0.2962 - val_accuracy: 0.9275 - val_loss: 0.5502 Epoch 45/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 379ms/step - accuracy: 0.9809 - loss: 0.3014 - val_accuracy: 0.9008 - val_loss: 0.6554 Epoch 46/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9808 - loss: 0.2938 - val_accuracy: 0.9237 - val_loss: 0.5517 Epoch 47/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 38s 373ms/step - accuracy: 0.9797 - loss: 0.2964 - val_accuracy: 0.9313 - val_loss: 0.5186 Epoch 48/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 39s 380ms/step - accuracy: 0.9811 - loss: 0.2918 - val_accuracy: 0.9237 - val_loss: 0.5346 Epoch 49/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 40s 394ms/step - accuracy: 0.9754 - loss: 0.3166 - val_accuracy: 0.9198 - val_loss: 0.5423 Epoch 50/50 100/100 ━━━━━━━━━━━━━━━━━━━━ 41s 407ms/step - accuracy: 0.9779 - loss: 0.2999 - val_accuracy: 0.9237 - val_loss: 0.5079
# Save the model
model.save('new_cnn_model_5.keras')
import matplotlib.pyplot as plt
# Plotting the training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('training_validation_loss.png')
plt.show()
# Predict the val model
y_pred = model.predict(X_valid)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_valid, y_pred)
print('Val Accuracy = %.4f' % accuracy)
9/9 ━━━━━━━━━━━━━━━━━━━━ 1s 127ms/step Val Accuracy = 0.9237
# Predict the test model
y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
# Calculate accuracy
accuracy = accuracy_score(Y_test, y_pred)
print('Test Accuracy = %.4f' % accuracy)
21/21 ━━━━━━━━━━━━━━━━━━━━ 2s 110ms/step Test Accuracy = 0.9296
print("Classification Report:\n",classification_report(Y_test, y_pred))
Classification Report:
precision recall f1-score support
0 0.94 0.91 0.93 219
1 0.90 0.91 0.90 187
2 0.91 0.90 0.90 87
3 0.96 0.99 0.98 160
accuracy 0.93 653
macro avg 0.93 0.93 0.93 653
weighted avg 0.93 0.93 0.93 653
Architecture 1 Run 3 has the highest accuracy
import pandas as pd
# Define data for Architecture 1
data1 = {
'Steps per Epoch': [5, 50, 100, 100, 37],
'Epochs': [10, 50, 50, 35, 100],
'Accuracy': [0.245, 0.9173, 0.9449, 0.928, 0.9096],
'Notes': ['', '', '', '', 'early stopped at epoch = 46']
}
# Define data for Architecture 2
data2 = {
'Steps per Epoch': [100, 100],
'Epochs': [100, 50],
'Accuracy': [0.9311, 0.9296],
'Notes': ['did not trigger early stop, run all 100 epochs', '']
}
# Create DataFrames
df1 = pd.DataFrame(data1, index=[f"Architecture 1 - Run {i+1}" for i in range(len(data1['Steps per Epoch']))])
df2 = pd.DataFrame(data2, index=[f"Architecture 2 (L2 Regularization) - Run {i+1}" for i in range(len(data2['Steps per Epoch']))])
# Concatenate both DataFrames
df = pd.concat([df1, df2])
df
| Steps per Epoch | Epochs | Accuracy | Notes | |
|---|---|---|---|---|
| Architecture 1 - Run 1 | 5 | 10 | 0.2450 | |
| Architecture 1 - Run 2 | 50 | 50 | 0.9173 | |
| Architecture 1 - Run 3 | 100 | 50 | 0.9449 | |
| Architecture 1 - Run 4 | 100 | 35 | 0.9280 | |
| Architecture 1 - Run 5 | 37 | 100 | 0.9096 | early stopped at epoch = 46 |
| Architecture 2 (L2 Regularization) - Run 1 | 100 | 100 | 0.9311 | did not trigger early stop, run all 100 epochs |
| Architecture 2 (L2 Regularization) - Run 2 | 100 | 50 | 0.9296 |
Model Performance Analysis
Training and Validation Loss: The plot shows that while the training loss has consistently decreased and flattened (indicating good learning), the validation loss has some fluctuations but generally follows the training loss closely without diverging too much. This suggests that the model is not overfitting significantly.
Validation Accuracy: Peaked at approximately 95.04% during training, which is quite high. Test Accuracy: Even higher at 94.49%. This consistency between validation and test accuracy is a good sign of the model's ability to generalize well.
Precision and Recall: Very high across all classes, with Class 3 achieving perfect recall (1.00). This indicates that the model is very effective in identifying true positives for Class 3 without any false negatives.
F1-Score: Also high across all classes, suggesting a good balance between precision and recall. The weighted averages for accuracy, precision, recall, and F1-score are all above 0.94, which is excellent.
Observations
Model Stability: The model demonstrates stable performance across metrics, which is indicative of robust learning capabilities.
Loss Fluctuations: The fluctuations in validation loss could be indicative of potential minor overfitting or could simply be a result of the model navigating through complex loss landscapes. However, as they do not diverge significantly, this is not a major concern currently.
Augmentation for Underrepresented Classes: Increase the number of augmented images for the underrepresented class (no_tumor) to balance the dataset.
Class Weights: Utilize class weights in the model training process to give more importance to underrepresented classes during the loss calculation.
Oversampling/Undersampling: Consider oversampling the minority class or undersampling the majority classes.
The current augmentation strategy is robust, but we can experiment with less aggressive transformations for brain images, where orientation and structure are important. For example, a high rotation range might not be appropriate as brain tumors and their structures could be highly orientation-specific.
Depth and Complexity: As you're dealing with complex medical images, consider gradually increasing the complexity of the CNN. Incorporate deeper layers or additional convolutional blocks to capture more complex features.
Advanced Architectures: Explore more sophisticated architectures like ResNet, Inception, or DenseNet, which might be more effective for medical image analysis due to their deeper and more complex structures.
Once deployed, continuously monitor the model’s performance and establish a feedback loop with medical professionals to collect insights and further improve the model.
Before full deployment, ensure that the model undergoes thorough clinical validation to meet regulatory standards and to confirm that it performs well across different demographics and equipment variations.